Web Survey Bibliography
Survey research has historically relied on a probabilistic model to underlie its sampling frame. With rare exception, online research is non-probabilistic. Research without the safety net of a probabilistic frame raises all kinds of alarms, and challenges to the reliability of online research have risen to a crescendo as its non-probabilistic nature has become evident. However, not all sampling frames must be probabilistic, provided their reliability can be demonstrated and tracked. Unfortunately, no standard metrics exist to track reliability in online sampling; in fact, whether the sources are access panels or social networks, there are no standardized means of balancing panels or even comparing them. To confound the situation, the commercially used convenience panels are vastly different from each other (Gittelman and Trimarchi, CASRO Panel Conference, February 2009, paper available). These differences are so far-reaching that those who elect to use these sample sources are not only without a safety net, they are at considerable professional risk. We have completed an analysis of eighteen American panels and found that respondent aging, the frequency of professional responders, other satisficing behaviors, and dramatic differences between sociologic, psychographic, and buying-behavior segmentations make for a cacophony of differences seemingly impossible to correct. Inter-panel comparisons themselves are rare, with data from only a very few panels having been presented on any scale.

In this study we will present the results of an extensive global study covering forty countries. Within each country, panels will be compared using a 17-minute questionnaire with 400 completes per panel, and we hope to present five or more providers per market. No such extensive comparison has been done on a global basis. Preliminary data (24,000 interviews) show evolutionary trends in convenience-panel development; between-panel differences appear more extreme in the United States than in other markets.

We are proposing two sets of practices: (1) using panel performance metrics [professionals, speeders, straight-liners, invalids, inconsistencies, etc.] for which we have developed standard quantitative measures, and (2) a mechanism for developing a family of sampling standards based upon segmentation by key variables such as, but not limited to, media, purchasing, and psychographics. It is the new availability of global data that allows us to present universal standards that meet the requirements that are our focus in this conference. In addition to measuring performance, we believe there are three key requirements for standard panel metrics: (1) the ability to capture panel performance variations consistent with the differing needs of sample users (e.g., a broadcasting company might wish to anchor its sampling frame to media segments); (2) the ability to create a database that is retrospective, so that new sample sources can be added without repeating the analysis; and (3) a focus on indices that are pragmatic in their measure (we always view buying behavior as the most pragmatic).
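The abstract does not spell out the quantitative measures behind these flags. As a minimal sketch of what such measures might look like, the Python fragment below flags two of the named behaviors using common heuristics: speeders as respondents finishing in under a third of the median interview time, and straight-liners as respondents giving an identical answer to every item of a rating grid. The threshold, column names, and data are illustrative assumptions, not the authors' definitions.

```python
"""Illustrative data-quality flags for panel respondents.

Assumptions (not from the paper): a speeder finishes in less than
one third of the median interview time; a straight-liner gives the
same answer to every item in a rating grid.
"""
import pandas as pd

SPEEDER_FRACTION = 1 / 3  # assumed cutoff, not the authors' value

def flag_quality(df: pd.DataFrame, grid_cols: list[str]) -> pd.DataFrame:
    out = df.copy()
    cutoff = SPEEDER_FRACTION * df["minutes"].median()
    out["speeder"] = df["minutes"] < cutoff
    # A straight-liner's grid answers contain exactly one distinct value.
    out["straight_liner"] = df[grid_cols].nunique(axis=1).eq(1)
    return out

if __name__ == "__main__":
    sample = pd.DataFrame({
        "respondent": ["r1", "r2", "r3", "r4"],
        "minutes":    [17.0, 4.0, 15.5, 18.2],
        "q1": [3, 5, 4, 2],
        "q2": [4, 5, 4, 2],
        "q3": [2, 5, 4, 5],
    })
    flagged = flag_quality(sample, grid_cols=["q1", "q2", "q3"])
    print(flagged[["respondent", "speeder", "straight_liner"]])
```

In the toy data, r2 is flagged as a speeder and r3 as a straight-liner; in practice each flag would feed the kind of standardized per-panel incidence rates the abstract describes.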
In this talk, we propose to use segmentation analysis as a new metric that allows us to anchor online data in a new non-probabilistic sampling frame. It is the existence of global data that gives us a rare opportunity to experiment with this methodology. Our goal is to use segmentation in each country to create a fingerprint that can be consistently maintained by blending panels. By minimizing the variability of the segments through optimization and panel combination, we will establish a means of stabilizing online data irrespective of the panels and sourcing modes from which they originate. We cannot stabilize online data unless we give it a reference point to anchor itself; the segments are that anchor. As sourcing models continue to shift, panels will age and shift with them; we need a reliable anchor that rises above these problems, and it is essential that we explore tools to measure these changes. Without a means of comparison we cannot expect to measure drift, nor can we expect to have a platform for predicting the future. We do not profess to be on the road to a new probabilistic framework, but rather to a platform for comparison and continuity. We believe that there is a theoretical population online that can serve this purpose. Using the database we have gathered, which includes respondents from over 160 global panels (64,000 interviews) distributed among 40 global markets, we shall introduce new methods to build "perspective".
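The abstract leaves the segmentation machinery unspecified. One plausible reading, sketched below in Python, is to cluster a country's pooled respondents on key variables (media, purchasing, psychographics) and treat each panel's distribution across the resulting segments as its fingerprint. The clustering method (k-means), the number of segments, and the synthetic data are assumptions made for illustration, not the paper's method.

```python
"""Sketch of a per-country segment 'fingerprint': cluster pooled
respondents on key variables, then express each panel as its share
of respondents falling in each segment."""
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for media/purchasing/psychographic scores.
X = rng.normal(size=(900, 6))
panel_ids = rng.choice(["panel_A", "panel_B", "panel_C"], size=900)

# Assumed 5 segments; the paper does not state a segment count.
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

def fingerprint(panel: str) -> np.ndarray:
    """Share of a panel's respondents in each of the 5 segments."""
    mask = panel_ids == panel
    return np.bincount(segments[mask], minlength=5) / mask.sum()

for p in ("panel_A", "panel_B", "panel_C"):
    print(p, np.round(fingerprint(p), 3))
```

A fingerprint of this kind gives each panel a fixed-length profile, which is what makes drift measurable: the same segment shares can be recomputed wave after wave and compared against the anchor.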
Based on this, we will use our segmentation models to create a "convenience" sampling frame by averaging segments into a "Grand Mean." Using optimization models, we will select the convenience panels that best reflect the Grand Mean and the proportions in which they best fit together, and we shall give evidence for the efficiency of these strategies.
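As a rough illustration of this optimization step, the sketch below finds nonnegative blend weights, summing to one, whose weighted combination of panel segment profiles comes closest in least squares to a Grand Mean target. Treating the Grand Mean as a fixed target vector (here standing in for the average over a wider set of panels than the three available for blending), the SLSQP solver, and the example profiles are all assumptions; the paper's own optimization models are not described in the abstract.

```python
"""Sketch of Grand Mean blending: choose nonnegative panel weights,
summing to 1, whose blended segment profile best matches a Grand
Mean target (least squares)."""
import numpy as np
from scipy.optimize import minimize

# Rows: panels; columns: segment shares (each row sums to 1).
profiles = np.array([
    [0.30, 0.25, 0.20, 0.15, 0.10],   # panel_A
    [0.10, 0.20, 0.30, 0.25, 0.15],   # panel_B
    [0.20, 0.20, 0.20, 0.20, 0.20],   # panel_C
])
# Assumed target: the segment average over the full panel set.
grand_mean = np.array([0.22, 0.22, 0.22, 0.18, 0.16])

def misfit(w: np.ndarray) -> float:
    """Squared distance between the blended profile and the Grand Mean."""
    return float(np.sum((w @ profiles - grand_mean) ** 2))

result = minimize(
    misfit,
    x0=np.full(3, 1 / 3),                        # start from an even blend
    bounds=[(0.0, 1.0)] * 3,                     # no negative proportions
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    method="SLSQP",
)
print("blend weights:", np.round(result.x, 3))
print("blended profile:", np.round(result.x @ profiles, 3))
```

The resulting blended profile can then be tracked over time; growing misfit against the Grand Mean would signal the kind of drift the talk proposes to measure, prompting re-optimization of the blend proportions.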